Efficient Algorithms for Non-convex Isotonic Regression through Submodular Optimization
We consider the minimization of submodular functions subject to ordering constraints. We show that this potentially non-convex optimization problem can be cast as a convex optimization problem on a space of uni-dimensional measures, with ordering constraints corresponding to first-order stochastic dominance. We propose new discretization schemes that lead to simple and efficient algorithms based on zeroth-, first-, or higher-order oracles; these algorithms also yield improvements in the absence of isotonic constraints. Finally, our experiments show that non-convex loss functions can be much more robust to outliers for isotonic regression, while still being solvable in polynomial time.
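A rough schematic of this reformulation, in paraphrased notation rather than the paper's own: each coordinate $x_i \in [0,1]$ is represented by a probability measure $\mu_i$ on $[0,1]$ with CDF $F_{\mu_i}$, the submodular objective $H$ extends to a convex functional $h$ on these measures, and

$$\min_{\substack{x \in [0,1]^n \\ x_i \le x_j \ \forall (i,j) \in E}} H(x) \;=\; \min_{\mu_1, \dots, \mu_n} h(\mu_1, \dots, \mu_n) \quad \text{s.t.} \quad F_{\mu_i}(t) \ge F_{\mu_j}(t) \ \text{ for all } t \text{ and } (i,j) \in E,$$

where the CDF inequality says that $\mu_j$ first-order stochastically dominates $\mu_i$, which encodes the ordering constraint $x_i \le x_j$.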
Reviews: Efficient Algorithms for Non-convex Isotonic Regression through Submodular Optimization
In this paper the authors consider the problem of minimizing a continuous submodular function subject to ordering (isotonic) constraints. They first show that the problem can be solved by discretizing it (per coordinate, rather than over the full hypercube [0,1]^n) and then solving the resulting discrete optimization problem using convex optimization. That the problem is solvable in polynomial time is of course not surprising: as the authors point out in lines 29-36, one can add a penalty to the objective that implicitly enforces the constraints. However, this can significantly increase the Lipschitz constant of the objective, which is why the authors pursue an alternative approach. First, they prove that, viewed in the space of measures, the isotonic constraints correspond to dominance inequalities between the CDFs (first-order stochastic dominance), which I guess is an intuitive result given what is known for the unconstrained case.
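To make the per-coordinate discretization idea concrete, here is a minimal sketch, and not the authors' algorithm (which handles general submodular objectives via convex optimization on measures): in the special case of a separable loss on a chain ordering, the discretized problem can be solved exactly by dynamic programming. The grid size k, the capped-absolute-deviation loss, and the function name isotonic_dp below are all illustrative assumptions.

```python
import numpy as np

def isotonic_dp(y, k=101, lo=0.0, hi=1.0, loss=None):
    """Exact isotonic regression on a chain after per-coordinate discretization.

    Each fitted value is restricted to a grid of k levels in [lo, hi]; the
    chain constraint x_0 <= x_1 <= ... <= x_{n-1} is enforced by dynamic
    programming, so loss(y_i, v) may be non-convex. Runtime is O(n * k).
    """
    if loss is None:
        # Illustrative non-convex robust loss: capped absolute deviation.
        loss = lambda yi, v: min(abs(yi - v), 0.3)
    grid = np.linspace(lo, hi, k)
    n = len(y)
    dp = np.array([loss(y[0], v) for v in grid])  # cost of setting x_0 = grid[v]
    parent = np.zeros((n, k), dtype=int)
    for i in range(1, n):
        # Prefix minimum over levels enforces x_{i-1} <= x_i.
        best = np.empty(k)
        m, arg = np.inf, 0
        for v in range(k):
            if dp[v] < m:
                m, arg = dp[v], v
            best[v] = m
            parent[i, v] = arg
        dp = best + np.array([loss(y[i], v) for v in grid])
    # Backtrack the optimal chain of levels.
    v = int(np.argmin(dp))
    x = np.empty(n)
    for i in range(n - 1, 0, -1):
        x[i] = grid[v]
        v = parent[i, v]
    x[0] = grid[v]
    return x

# Example: with the capped loss, one gross outlier barely perturbs the fit.
y = np.array([0.1, 0.2, 0.9, 0.3, 0.4, 0.5])
print(isotonic_dp(y))
```

Dynamic programming is only viable here because the ordering is a chain and the loss is separable; as I read it, the paper's convex-optimization route is what makes general orderings and non-separable submodular objectives tractable.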